ϵ-subgradient algorithms for bilevel convex optimization


Similar references

Incremental Stochastic Subgradient Algorithms for Convex Optimization

This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorit...
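The cyclic incremental scheme the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: all names are invented, each "agent" holds the component f_i(x) = |x - a_i|, and a diminishing step size 1/k is used.

```python
import numpy as np

def incremental_subgradient(anchors, x0=0.0, passes=200):
    """Cyclic incremental subgradient sketch for f(x) = sum_i |x - a_i|.
    One pass visits each agent's component in turn, updating the shared
    iterate with that component's subgradient alone."""
    x = x0
    for k in range(1, passes + 1):
        alpha = 1.0 / k                  # diminishing step size
        for a in anchors:                # one cyclic pass over the agents
            g = np.sign(x - a)           # a subgradient of |x - a|
            x = x - alpha * g            # incremental update
    return x

# The minimizer of sum_i |x - a_i| is a median of the anchors.
x_star = incremental_subgradient([1.0, 2.0, 10.0])
```

With the 1/k step size the iterate settles into a shrinking neighborhood of the median (here 2.0); the stochastic-error analysis in the paper concerns what happens when each `g` is corrupted by noise.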


Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradi...
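The combine-then-step structure described above can be sketched as follows. This is an illustrative toy, not the paper's method: three agents on a path graph with invented doubly stochastic weights `W`, local objectives (x - a_i)^2, and a common interval constraint enforced by projection (`np.clip`).

```python
import numpy as np

def distributed_subgradient_projection(anchors, W, interval=(0.0, 3.0), steps=500):
    """Each agent averages its neighbors' iterates (v = W @ x), takes a
    subgradient step on its own objective, then projects onto the common
    constraint set [lo, hi]."""
    lo, hi = interval
    x = np.zeros(len(anchors))
    for k in range(1, steps + 1):
        alpha = 1.0 / k
        v = W @ x                              # weighted average of neighbors
        g = 2.0 * (v - np.array(anchors))      # gradient of each local (x - a_i)^2
        x = np.clip(v - alpha * g, lo, hi)     # projection onto the constraint
    return x

# Doubly stochastic weights for a 3-agent path graph (illustrative choice).
W = np.array([[0.50, 0.50, 0.00],
              [0.50, 0.25, 0.25],
              [0.00, 0.25, 0.75]])
x = distributed_subgradient_projection([1.0, 2.0, 9.0], W)
```

The unconstrained minimizer of the sum is the mean 4.0, so the constraint is active and all agents reach consensus near the boundary point 3.0.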


"Efficient" Subgradient Methods for General Convex Optimization

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly-feasible point is known. A feasible sequence of iterates is generated, which converges to within user-specified error of optimality. Feasibility is maintained with a linesearch at each iteration, avoiding the need for orthogonal projections onto the feasible region (an ...
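The projection-free idea — keep iterates feasible by searching along the segment back toward a known strictly feasible point — can be illustrated with a crude back-off linesearch. This is a hedged sketch under invented names, not the method of the paper (which uses a more careful linesearch and error analysis).

```python
import numpy as np

def feasible_subgradient(grad, feasible, e, x0, steps=300):
    """Subgradient sketch that maintains feasibility without orthogonal
    projection: if a trial step leaves the feasible set, back off along
    the segment toward the strictly feasible point e until feasible."""
    x = np.asarray(x0, float)
    for k in range(1, steps + 1):
        g = grad(x)
        y = x - (1.0 / k) * g / (np.linalg.norm(g) + 1e-12)
        t = 1.0
        while not feasible(e + t * (y - e)):   # bisection toward e
            t *= 0.5
        x = e + t * (y - e)
    return x

# Toy problem: minimize f(x) = |x1| + |x2| over the unit ball;
# the origin is both strictly feasible and optimal.
grad = lambda x: np.sign(x)
feasible = lambda x: np.linalg.norm(x) <= 1.0
x = feasible_subgradient(grad, feasible,
                         e=np.array([0.0, 0.0]), x0=np.array([0.9, 0.2]))
```

Because every trial point is tested directly, the method only needs a membership oracle for the feasible region, not a projection oracle.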


ε-subgradient algorithms for locally Lipschitz functions on Riemannian manifolds

This paper presents a descent direction method for finding extrema of locally Lipschitz functions defined on Riemannian manifolds. To this end we define a set-valued mapping x → ∂εf(x) named ε-subdifferential which is an approximation for the Clarke subdifferential and which generalizes the Goldstein-ε-subdifferential to the Riemannian setting. Using this notion we construct a steepest descent ...
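The Goldstein-ε-subdifferential idea behind this method can be shown in the simplest possible setting: the Euclidean line, to which the Riemannian construction reduces locally. This is an illustrative one-dimensional sketch with invented names, not the paper's manifold algorithm. For f(x) = |x|, the ε-subdifferential at x is the convex hull of the gradients sign(y) over |y - x| ≤ ε, and the descent direction is the negative of its minimum-norm element.

```python
import numpy as np

def eps_subgradient_descent(grad, x, eps=0.05, tol=1e-8, max_iter=1000):
    """1-D Goldstein-style descent sketch: sample gradients in the
    eps-ball, take the minimum-norm element of their convex hull
    (an interval in 1-D), stop when 0 belongs to it."""
    for _ in range(max_iter):
        gs = [grad(x - eps), grad(x), grad(x + eps)]   # sample the eps-ball
        lo, hi = min(gs), max(gs)
        # minimum-norm element of the interval [lo, hi]
        g = 0.0 if lo <= 0.0 <= hi else (lo if lo > 0 else hi)
        if abs(g) <= tol:            # 0 in the eps-subdifferential:
            return x                 # x is eps-stationary
        x -= eps * np.sign(g)        # descent step of length eps
    return x

x_star = eps_subgradient_descent(np.sign, x=1.0)   # ends within eps of 0
```

The same stopping test (0 lies in the ε-subdifferential) is what certifies ε-stationarity for nonsmooth f; on a manifold the sampled gradients must first be transported to a common tangent space.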


Technical Report Subgradient Optimization for Convex Multiparametric Programming

In this paper we develop a subgradient optimization methodology for convex multiparametric nonlinear programs. We define a parametric subgradient, extend some classical optimization results to the multiparametric case, and design a subgradient algorithm that is shown to converge under traditional conditions. We use this algorithm to solve two illustrative example problems and demonstrate its ac...



Journal

Journal title: Inverse Problems

Year: 2017

ISSN: 0266-5611, 1361-6420

DOI: 10.1088/1361-6420/aa6136